Quantum Associative Memory
This paper combines quantum computation with classical neural network theory
to produce a quantum computational learning algorithm. Quantum computation uses
microscopic quantum level effects to perform computational tasks and has
produced results that in some cases are exponentially faster than their
classical counterparts. The unique characteristics of quantum theory may also
be used to create a quantum associative memory with a capacity exponential in
the number of neurons. This paper combines two quantum computational algorithms
to produce such a quantum associative memory. The result is an exponential
increase in the capacity of the memory when compared to traditional associative
memories such as the Hopfield network. The paper covers necessary high-level
quantum mechanical and quantum computational ideas and introduces a quantum
associative memory. Theoretical analysis proves the utility of the memory, and
it is noted that a small version should be physically realizable in the near
future.
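For context on the classical baseline, a Hopfield network stores patterns in a Hebbian weight matrix and recalls them by iterated thresholding; its capacity scales only linearly in the number of neurons (roughly 0.14n patterns), which is what the claimed exponential quantum capacity is being compared against. The following is a minimal illustrative sketch, not code from the paper; the pattern sizes and synchronous update rule are chosen for brevity:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: sum of outer products of the stored bipolar patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, probe, steps=10):
    """Synchronous sign updates until the state settles (or the step limit)."""
    s = probe.copy()
    for _ in range(steps):
        s_next = np.sign(W @ s)
        s_next[s_next == 0] = 1
        if np.array_equal(s_next, s):
            break
        s = s_next
    return s

# Store two 8-neuron bipolar patterns, then recall one from a corrupted probe.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = train_hopfield(patterns)
probe = patterns[0].copy()
probe[0] = -1  # flip one bit
recalled = recall(W, probe)
```

Because the two stored patterns are orthogonal, a single update already restores the corrupted bit; with many more patterns than the linear capacity allows, recall degrades, which motivates the search for higher-capacity memories.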
A Quantum Computational Learning Algorithm
An interesting classical result due to Jackson allows polynomial-time
learning of the function class DNF using membership queries. Since in most
practical learning situations access to a membership oracle is unrealistic,
this paper explores the possibility that quantum computation might allow a
learning algorithm for DNF that relies only on example queries. A natural
extension of Fourier-based learning into the quantum domain is presented. The
algorithm requires only an example oracle, and it runs in O(sqrt(2^n)) time, a
result that appears to be classically impossible. The algorithm is unique among
quantum algorithms in that it does not assume a priori knowledge of a function
and does not operate on a superposition that includes all possible states.

Comment: This is a reworked and improved version of a paper originally
entitled "Quantum Harmonic Sieve: Learning DNF Using a Classical Example
Oracle".
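The O(sqrt(2^n)) running time is characteristic of Grover-style amplitude amplification. The paper's algorithm is more involved than plain Grover search, but the source of the quadratic speedup can be illustrated by simulating a basic Grover iteration on a classical state vector; this sketch is illustrative only and is not the paper's algorithm:

```python
import math
import numpy as np

def grover_search(n_qubits, marked):
    """Simulate Grover's algorithm on a state vector of 2**n amplitudes."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / math.sqrt(N))    # uniform superposition
    iterations = round(math.pi / 4 * math.sqrt(N))
    for _ in range(iterations):
        state[marked] *= -1                 # oracle: flip the marked amplitude
        state = 2 * state.mean() - state    # diffusion: inversion about the mean
    return state, iterations

# Find one marked item among 2^8 = 256 in ~sqrt(256) oracle calls.
state, iters = grover_search(8, marked=42)
success_prob = state[42] ** 2
```

After about (pi/4)*sqrt(N) iterations the marked amplitude dominates, so measuring the register returns the marked item with probability near 1; a classical exhaustive search would need on the order of N queries.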
Reducing the Effects of Detrimental Instances
Not all instances in a data set are equally beneficial for inducing a model
of the data. Some instances (such as outliers or noise) can be detrimental.
However, at least initially, the instances in a data set are generally
considered equally in machine learning algorithms. Many current approaches for
handling noisy and detrimental instances make a binary decision about whether
an instance is detrimental or not. In this paper, we 1) extend this paradigm by
weighting the instances on a continuous scale and 2) present a methodology for
measuring how detrimental an instance may be for inducing a model of the data.
We call our method of identifying and weighting detrimental instances reduced
detrimental instance learning (RDIL). We examine RDIL on a set of 54 data sets
and 5 learning algorithms and compare RDIL with other weighting and filtering
approaches. RDIL is especially useful for learning algorithms where every
instance can affect the classification boundary and the training instances are
considered individually, such as multilayer perceptrons (MLPs) trained with
backpropagation. Our results also suggest that a more accurate estimate of
which instances are detrimental can have a significant positive impact on
handling them.

Comment: 6 pages, 5 tables, 2 figures. arXiv admin note: substantial text
overlap with arXiv:1403.189
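The core idea of continuous instance weighting, as opposed to a binary keep/discard filter, can be sketched by scaling each instance's contribution to the training loss by its weight. The weighting scheme below is a hypothetical stand-in, not RDIL's actual detrimentality measure, and the model is a simple logistic regression rather than an MLP:

```python
import numpy as np

def weighted_logistic_fit(X, y, weights, lr=0.1, epochs=500):
    """Gradient descent on a per-instance weighted logistic loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        err = weights * (p - y)       # each instance scaled by its weight
        w -= lr * X.T @ err / len(y)
        b -= lr * err.mean()
    return w, b

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
y[:5] = 1 - y[:5]          # inject label noise into five instances
weights = np.ones(100)
weights[:5] = 0.1          # downweight the suspected detrimental instances
w, b = weighted_logistic_fit(X, y, weights)
p = 1 / (1 + np.exp(-(X @ w + b)))
acc = ((p > 0.5) == (y > 0.5)).mean()
```

Downweighting (rather than deleting) the noisy instances lets the model fit a boundary close to the clean one while still retaining whatever signal those instances carry; with a binary filter, a borderline instance is either fully trusted or fully discarded.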